An Implementation of Back-Propagation Learning on GF11, a Large SIMD Parallel Computer
Authors
Abstract
Current connectionist simulations require huge computational resources. We describe a neural network simulator for the IBM GF11, an experimental SIMD machine with 566 processors and a peak arithmetic performance of 11 Gigaflops. We present our parallel implementation of the back-propagation learning algorithm, techniques for increasing efficiency, performance measurements on the NetTalk text-to-speech benchmark, and a performance model for the simulator. Our simulator currently runs the back-propagation learning algorithm at 900 million connections per second, where each “connection per second” includes both a forward and a backward pass. This figure was obtained on the machine when only 356 processors were working; with all 566 processors operational, our simulation will run at over one billion connections per second. We conclude that the GF11 is well suited to neural network simulation, and we analyze our use of the machine to determine which features are most important for high performance.

This research was performed at and supported by the IBM T.J. Watson Research Center, Yorktown Heights, NY 10598. The production of this report was supported in part by the Hughes Aircraft Corporation and by National Science Foundation grant ECS-8716324. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the International Business Machines Corporation, the Hughes Aircraft Corporation, the National Science Foundation, or the U.S. Government.
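To make the throughput figure above concrete, here is a minimal sketch of how a "connections per second" number is typically computed: total weights traversed, times training patterns processed, divided by elapsed time, with each pattern's forward/backward pair counted once. The layer sizes and timing below are illustrative assumptions, not measurements from the paper.

```python
# Illustration of the "connections per second" (CPS) figure of merit.
# The layer sizes below are an assumed NetTalk-style topology and need
# not match the exact configuration used in the GF11 experiments.

def connections(layer_sizes):
    """Weights in a fully connected feed-forward network (biases omitted)."""
    return sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))

def connections_per_second(layer_sizes, patterns, seconds):
    """Each training pattern traverses every weight once on the forward
    pass and once on the backward pass; by the convention used in the
    abstract, that forward/backward pair counts as one connection."""
    return connections(layer_sizes) * patterns / seconds

if __name__ == "__main__":
    nettalk_like = [203, 80, 26]                      # assumed layer sizes
    print(connections(nettalk_like))                  # 18320 weights
    # Hypothetical timing: 50,000 patterns processed in one second.
    print(connections_per_second(nettalk_like, 50_000, 1.0))  # 9.16e8 CPS
```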
Similar references
Back-propagation learning algorithm and parallel computers: The CLEPSYDRA mapping scheme
This paper deals with the parallel implementation of the back-propagation of errors learning algorithm. To obtain the partitioning of the neural network onto the processor network, the author describes a new mapping scheme that uses a mixture of synapse parallelism, neuron parallelism, and training-example parallelism (if any). The proposed mapping scheme makes it possible to describe the back-propagation al...
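As a rough illustration of the training-example parallelism mentioned above (not of the CLEPSYDRA mapping itself), the sketch below splits a batch across simulated processors, has each compute a local gradient for a single linear layer with squared error, and sums the results before one shared weight update; all names and sizes are invented for the example.

```python
# Minimal sketch of training-example parallelism for a single linear
# layer with squared error: the batch is split into shards (one per
# simulated processor), each shard yields a local gradient, and the
# local gradients are summed before one synchronized weight update.
import numpy as np

rng = np.random.default_rng(0)
P = 4                                    # number of simulated processors
X = rng.normal(size=(64, 10))            # 64 training examples, 10 inputs
T = rng.normal(size=(64, 3))             # corresponding targets, 3 outputs
W = rng.normal(size=(10, 3)) * 0.1       # weights shared by all processors

def local_gradient(x, t, w):
    """Gradient of 0.5*||x @ w - t||^2 with respect to w on one shard."""
    return x.T @ (x @ w - t)

# Each simulated processor computes the gradient on its own shard ...
shards = zip(np.array_split(X, P), np.array_split(T, P))
summed = sum(local_gradient(xs, ts, W) for xs, ts in shards)

# ... and the summed result equals the gradient over the whole batch.
assert np.allclose(summed, local_gradient(X, T, W))

W -= 0.01 * summed                       # one synchronized weight update
```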
Parallel implementation of underwater acoustic wave propagation using beamtracing method on graphical processing unit
The mathematical modeling of acoustic wave propagation in seawater is the basis for realizing goals such as underwater communication, seabed mapping, advanced fishing, oil and gas exploration, marine meteorology, positioning, and the exploration of unknown targets within the water. However, due to the existence of various physical phenomena in the water environment and the various conditions gover...
An efficient implementation of a backpropagation learning algorithm on a Quadrics parallel supercomputer
A parallel implementation of a library to build and train multilayer perceptrons via the back-propagation algorithm is presented. The target machine is the SIMD massively parallel supercomputer Quadrics. Performance measurements are provided on three machines with different numbers of processors, for two example networks. A sample source code is given. ...
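To show the kind of computation such a library performs, here is a minimal, self-contained numpy sketch of one back-propagation step for a single-hidden-layer perceptron with sigmoid units and squared error; it is illustrative only and unrelated to the actual Quadrics code.

```python
# Minimal back-propagation step for a 1-hidden-layer MLP (sigmoid units,
# squared-error loss). Illustrative only; not the Quadrics library code.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def backprop_step(x, t, W1, W2, lr=0.1):
    # Forward pass
    h = sigmoid(x @ W1)                    # hidden activations
    y = sigmoid(h @ W2)                    # output activations
    # Backward pass (deltas for squared error with sigmoid outputs)
    delta_out = (y - t) * y * (1.0 - y)
    delta_hid = (delta_out @ W2.T) * h * (1.0 - h)
    # Gradient-descent weight updates (in place)
    W2 -= lr * np.outer(h, delta_out)
    W1 -= lr * np.outer(x, delta_hid)
    return 0.5 * np.sum((y - t) ** 2)

rng = np.random.default_rng(1)
W1 = rng.normal(scale=0.5, size=(4, 5))    # 4 inputs -> 5 hidden units
W2 = rng.normal(scale=0.5, size=(5, 2))    # 5 hidden -> 2 outputs
x, t = rng.normal(size=4), np.array([0.0, 1.0])
for _ in range(200):
    loss = backprop_step(x, t, W1, W2)
print(loss)   # decreases toward zero for this single training pattern
```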
Performance of Connectionist Learning Algorithms on 2-D SIMD Processor Arrays
The mapping of the back-propagation and mean-field-theory learning algorithms onto a generic 2-D SIMD computer is described. This architecture proves well suited to these applications, since efficiencies close to the optimum can be attained. Expressions for finding the learning rates are given and then particularized to the DAP array processor.
Weight Perturbation: An Optimal Architecture and Learning Technique for Analog VLSI Feedforward and Recurrent Multilayer Networks
Previous work on analog VLSI implementation of multilayer perceptrons with on-chip learning has mainly targeted the implementation of algorithms such as back-propagation. Although back-propagation is efficient, its implementation in analog VLSI requires excessive computational hardware. It is shown that using gradient descent with direct approximation of the gradient instead of back-propagation...
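The gradient approximation referred to above can be sketched as a finite difference: perturb one weight at a time, measure the change in the error, and use that change as the gradient estimate. The sketch below uses a single linear neuron with squared error and invented names and sizes; it stands in for the idea, not for the authors' analog VLSI circuitry.

```python
# Weight perturbation: estimate dE/dw by a finite difference, one weight
# at a time, instead of computing the gradient analytically as in
# back-propagation. Minimal sketch for a single linear neuron.
import numpy as np

def error(w, x, t):
    """Squared error of a single linear neuron on one example."""
    return 0.5 * (x @ w - t) ** 2

def perturbation_gradient(w, x, t, eps=1e-3):
    """Approximate the gradient by perturbing each weight in turn."""
    base = error(w, x, t)
    grad = np.zeros_like(w)
    for i in range(len(w)):
        w[i] += eps                       # apply the perturbation
        grad[i] = (error(w, x, t) - base) / eps
        w[i] -= eps                       # restore the weight
    return grad

rng = np.random.default_rng(2)
w = rng.normal(size=3)
x, t = np.array([0.5, -1.0, 2.0]), 1.0
for _ in range(100):
    w -= 0.1 * perturbation_gradient(w, x, t)
print(error(w, x, t))   # approaches zero as w converges
```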
Journal: Parallel Computing
Volume: 14
Issue: -
Pages: -
Publication year: 1990